A Proximal Point Algorithm for Log-Determinant Optimization with Group Lasso Regularization

Authors

  • Junfeng Yang
  • Defeng Sun
  • Kim-Chuan Toh
Abstract

We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is formulated by penalizing the maximum likelihood estimation of the inverse covariance matrix with group Lasso regularization. We propose to solve the resulting log-determinant optimization problem with the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directly, we first solve the dual subproblem by an inexact semismooth Newton-CG method and then update the primal variables by explicit formulas based on the computed dual variables. We also propose to accelerate the PPA by an inexact generalized Newton's method when the iterate is close to the solution. Theoretically, we prove that at the optimal solution, the nonsingularity of the generalized Hessian matrices of the dual subproblem is equivalent to the constraint nondegeneracy condition for the primal problem. Global and local convergence results are also presented for the proposed PPA. Moreover, based on the augmented Lagrangian function of the dual problem, we derive an alternating direction method (ADM), which is easily implementable and is demonstrated to be efficient for random problems. Numerical results, including comparisons with the ADM on both synthetic and real data, are presented to demonstrate that the proposed Newton-CG based PPA is stable and efficient and, in particular, outperforms the ADM when high accuracy is required.
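
To make the model concrete: the estimator minimizes -log det X + <S, X> + rho * sum_g ||X_g||_F over positive definite X, where S is the sample covariance matrix and the X_g are the blocks induced by the group structure. The following Python sketch implements a generic primal ADMM for this model, in the spirit of the ADM mentioned above but not the authors' dual augmented Lagrangian derivation; the penalty parameter beta, the stopping rule, and the choice to leave diagonal blocks unpenalized are illustrative assumptions.

    import numpy as np

    def admm_group_logdet(S, groups, rho, beta=1.0, iters=500, tol=1e-6):
        """Sketch of ADMM for: min_{X > 0} -log det X + <S, X> + rho * sum_g ||X_g||_F.
        `groups` is a partition of {0, ..., n-1}; X_g are the off-diagonal blocks."""
        n = S.shape[0]
        Z = np.eye(n)
        U = np.zeros((n, n))
        for _ in range(iters):
            # X-step: argmin -log det X + <S, X> + (beta/2) ||X - Z + U||_F^2,
            # solved in closed form through an eigendecomposition.
            q, V = np.linalg.eigh(beta * (Z - U) - S)
            x = (q + np.sqrt(q ** 2 + 4.0 * beta)) / (2.0 * beta)  # roots of beta*x^2 - q*x - 1
            X = (V * x) @ V.T  # X = V diag(x) V^T, positive definite by construction
            # Z-step: blockwise soft-thresholding of A = X + U (prox of the group penalty).
            A = X + U
            Z = A.copy()
            for i, gi in enumerate(groups):
                for j, gj in enumerate(groups):
                    if i == j:
                        continue  # diagonal blocks left unpenalized (an assumption)
                    B = A[np.ix_(gi, gj)]
                    nrm = np.linalg.norm(B)
                    Z[np.ix_(gi, gj)] = (1.0 - (rho / beta) / nrm) * B if nrm > rho / beta else 0.0
            # Multiplier update and a crude stopping test on primal feasibility.
            U += X - Z
            if np.linalg.norm(X - Z) <= tol * max(1.0, np.linalg.norm(X)):
                break
        return Z  # Z carries the exact blockwise sparsity pattern

Here X converges to the inverse covariance estimate while Z exposes the blockwise zeros; in practice one would also tune beta and terminate on both primal and dual residuals.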

Similar References

A Fixed-point Proximity Approach to Solving the Support Vector Regression with the Group Lasso Regularization

We introduce an optimization model of the support vector regression with the group lasso regularization and develop a class of efficient two-step fixed-point proximity algorithms to solve it numerically. To overcome the difficulty caused by the non-differentiability of the group lasso regularization term and the loss function in the proposed model, we characterize its solutions as fixed-points...

An Alternating Proximal Splitting Method with Global Convergence for Nonconvex Structured Sparsity Optimization

In many learning tasks with structural properties, structured sparse modeling usually leads to better interpretability and higher generalization performance. While great efforts have focused on convex regularization, recent studies show that nonconvex regularizers can outperform their convex counterparts in many situations. However, the resulting nonconvex optimization problems are still ch...

Gradient Descent with Proximal Average for Nonconvex and Composite Regularization

Sparse modeling has been highly successful in many real-world applications. While much interest has focused on convex regularization, recent studies show that nonconvex regularizers can outperform their convex counterparts in many situations. However, the resulting nonconvex optimization problems are often challenging, especially for composite regularizers such as the nonconvex overlapping gr...

Solving Log-Determinant Optimization Problems by a Newton-CG Primal Proximal Point Algorithm

We propose a Newton-CG primal proximal point algorithm for solving large-scale log-determinant optimization problems. Our algorithm employs the essential ideas of the proximal point algorithm, the Newton method, and the preconditioned conjugate gradient solver. When applying the Newton method to solve the inner sub-problem, we find that the log-determinant term plays the role of a smoothing term...
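
For reference, the classical proximal point iteration underlying both this paper and the main article above generates, from the current iterate X^k and a stepsize sigma_k (symbols chosen here for illustration),

    X^{k+1} \approx \operatorname{argmin}_{X \succ 0} \left\{ f(X) + \frac{1}{2\sigma_k} \| X - X^k \|_F^2 \right\},

where f is the log-determinant objective; each subproblem is strongly convex and is solved inexactly, e.g. by Newton-CG applied to its dual.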

Accelerated Stochastic Gradient Method for Composite Regularization

Regularized risk minimization often involves nonsmooth optimization. This can be particularly challenging when the regularizer is a sum of simpler regularizers, as in the overlapping group lasso. Very recently, this has been alleviated by using the proximal average, in which an implicitly nonsmooth function is employed to approximate the composite regularizer. In this paper, we propose a novel extens...
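
The computational appeal of the proximal average is that, although the proximal operator of a sum of regularizers is generally intractable, the proximal operator of their proximal average is simply the average of the individual proximal operators. A minimal Python sketch of this identity follows; the component regularizers and all names are illustrative.

    import numpy as np

    def prox_l1(x, t):
        # Soft-thresholding: the prox of t * ||.||_1.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_group(x, t, groups):
        # Blockwise soft-thresholding: the prox of t * sum_g ||x_g||_2.
        out = x.copy()
        for g in groups:
            nrm = np.linalg.norm(x[g])
            out[g] = (1.0 - t / nrm) * x[g] if nrm > t else 0.0
        return out

    def prox_of_proximal_average(x, t, proxes):
        # Identity: the prox of the proximal average of f_1, ..., f_n (with
        # smoothing parameter t) equals the average of the individual proxes.
        return sum(p(x, t) for p in proxes) / len(proxes)

    # Example: two simple components standing in for an overlapping group penalty.
    # proxes = [prox_l1, lambda x, t: prox_group(x, t, groups)]

Inside a proximal-gradient loop this replaces one intractable composite prox with n cheap ones per iteration.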

Journal Title:
  • SIAM Journal on Optimization

Volume 23  Issue 

Pages  -

Publication Date 2013